Artificial Intelligence Regulation in the European Union

An overview as of December 2021

Sigrid Keydana

Agenda


  1. Privacy- and human rights-related legislation: Foundations

  2. Privacy- and human rights-related legislation: Drafts / Proposals (as of 12/2021)

  3. The Proposed Artificial Intelligence Act

European Convention on Human Rights (1953)


  • Drafted by the Council of Europe (founded in 1949)

  • Enforced by: European Court of Human Rights (ECtHR, Strasbourg)


Article 8 – Right to respect for private and family life

Everyone has the right to respect for his private and family life, his home and his correspondence.

There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.

Charter of Fundamental Rights of the European Union (2009)

  • Effective as of entry into force of Treaty of Lisbon (2009)

  • Enforced by: Court of Justice of the European Union (CJEU, Luxembourg)

Article 7 - Respect for private and family life
Everyone has the right to respect for his or her private and family life, home and communications.

Article 8 - Protection of personal data

  1. Everyone has the right to the protection of personal data concerning him or her.

  2. Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified.

  3. Compliance with these rules shall be subject to control by an independent authority.

General Data Protection Regulation (GDPR, 2018)

  • applies whenever personal data are collected, used, or stored

  • rights:

    • absolute: to be informed, to access, to rectification, to data portability

    • restricted: to erasure, to restrict processing, to object

  • roles: controller, joint controller, processor

GDPR & AI: Privacy Impact Assessment (PIA)

A Privacy Impact Assessment (in GDPR terminology, a Data Protection Impact Assessment, DPIA) must always be conducted when the processing could result in a high risk to the rights and freedoms of natural persons. E.g.,

  • scoring/profiling,

  • automatic decisions which lead to legal consequences for those impacted,

  • systematic monitoring,

  • processing of special personal data,

  • the merging or combining of data which was gathered by various processes,

  • data transfer to countries outside the EU/EEA

e-Privacy Directive (2002, amended 2009)

  • will be repealed by ePrivacy Regulation once that’s been adopted

  • based on Article 16 and Article 114 of the Treaty on the Functioning of the European Union (TFEU)1, the latter of which lays out rules for the internal market 2

  • rules on data retention, cookies, e-mail communication (among others)

  • criticized as a step back compared to the GDPR

Data Governance Act (11/2020)

  • covers: re-use of public data; “data sharing services/intermediaries”; “data altruism”

  • where data includes personal data

  • “without prejudice to Regulation (EU) 2016/679 and Directive 2002/58/EC”

“Indeed, considering that data protection is a fundamental right guaranteed by Article 8 of the Charter, and taking into account that one of the main purposes of the GDPR is to provide data subjects with control over personal data relating to them, the EDPB reiterates that personal data cannot be considered as a “tradeable commodity”. An important consequence of this is that, even if the data subject can agree to the processing of his or her personal data, he or she cannot waive his or her fundamental rights.” 3

Digital Services Act (12/2020)

  • defines a layered set of responsibilities for intermediaries (network infrastructure), hosting services, online platforms, and “very large online platforms”

  • Concerns:4

    • allows for the use of AI systems that categorize individuals, based on biometrics, by ethnicity, gender, and political or sexual orientation

    • allows emotion recognition

    • allows targeted advertising

Digital Markets Act (12/2020)


  • sets up ex-ante rules for so-called gatekeepers, including

    • to refrain from combining personal data from different sources

    • to submit (on an annual basis) an independently audited description of any techniques deployed for profiling consumers

The Proposed Artificial Intelligence Act

Objectives


The general objective is to ‘ensure the proper functioning of the internal market by creating the conditions for the development and use of trustworthy artificial intelligence in the Union’ (impact assessment, p. 32).

The specific objectives are:

(i) to ensure that AI systems placed on the market and used are safe and respect existing rules on fundamental rights and Union values,

(ii) to ensure legal certainty to facilitate investment and innovation in AI,

(iii) to enhance governance and effective enforcement of existing rules on fundamental rights and safety requirements applicable to AI systems, and

(iv) to facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation. 5

Approach


Risk-based:

  • Unacceptable risk: prohibited (absolutely or with exceptions)

  • High risk: catalog of requirements

  • Limited risk: transparency requirements

  • Minimal risk: “encourage” and “facilitate” voluntary codes of conduct

“Ethics washing made in Europe” 6

Unacceptable risk (1, prohibited): Manipulation (1)


The placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm


“[a]n inaudible sound [played] in truck drivers’ cabins to push them to drive longer than healthy and safe [where] AI is used to find the frequency maximising this effect on drivers”


Unacceptable risk (1, prohibited): Manipulation (2)


The placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm


“[a] doll with integrated voice assistant [which] encourages a minor to engage in progressively dangerous behavior or challenges in the guise of a fun or cool game”

Unacceptable risk (1, prohibited): Problems 7


  • Harm: does not consider cumulative harm

  • Intent:

    • hard to prove

    • does not consider dual use

    • does not consider the usual newspeak / marketing strategies

    • leaves out dynamics in the user base

  • Does not add much to existing EU law (Unfair Commercial Practices Directive)

Unacceptable risk (2, prohibited): Social scoring


The sale or use of AI systems used by or on behalf of public authorities, to generate trustworthiness scores and which lead to either unjustified or disproportionate treatment of individuals or groups, or detrimental treatment which, while justifiable and proportionate, occurs in an unrelated context from the input data

Problems: 8

  • What counts as an “unrelated context” is open to interpretation

  • Public authorities: What about, e.g., privately controlled delivery, telecommunications or transport?

Unacceptable risk (3, partially prohibited): Biometric systems

Some uses of real-time biometric systems in publicly accessible spaces by law enforcement

Problems: 9

  • Covers usage only, not the sale of such systems

  • Applies only to use by law enforcement

  • Real-time identification only

  • Broad exceptions: search for victims/missing children; threat to life or physical safety / terrorist attack; detection, localisation, identification or prosecution of a perpetrator or suspect of a crime carrying a maximum sentence of at least 3 years

High risk: To health, safety and fundamental rights


… in a number of defined applications, products and sectors.


Based on and entwined with the New Legislative Framework (NLF; called the New Approach when introduced in 1985), a common EU approach to the regulation of certain products such as lifts, medical devices, personal protective equipment, and toys

High risk: Applicability

  1. AI systems that are products or safety components (broadly construed) of products already covered by certain Union health and safety harmonisation legislation (such as toys, machinery, lifts, or medical devices).
  2. Standalone AI systems specified in an annex for use in eight fixed areas:

     1. biometric identification and categorisation (both remote and offline);

     2. management and operation of critical infrastructure;

     3. educational and vocational training;

     4. employment, worker management and access to self-employment;

     5. access to and enjoyment of essential services and benefits;

     6. law enforcement;

     7. migration, asylum and border management;

     8. administration of justice and democracy.

High risk: Requirements


What:

Risk management system; data quality criteria; accuracy; robustness and cybersecurity; technical documentation; logging; human oversight.

Who:

Providers, not users.

How:

Certification, or adherence to standards developed by the three European Standardisation Organisations (ESOs).

High-risk: Problems 10

    • Sub-areas can be added within these areas, if they pose a similar risk to an existing in-scope application, but entirely new areas cannot be added.

    • Datasets only need to meet requirements “sufficiently” and “in view of the intended purpose of the system”

    • No explicit discussion of leakage of training data or other personal data from models

    • Unclear if essential stakeholders will be involved in the standardisation process

    • Certification bodies are private bodies!

    • Restrictions are on providers; however, the provider may not know how the system is used.

Limited risk: To health, safety and fundamental rights

For “limited-risk” applications, either the provider or the user has to fulfill transparency requirements.

Three categories are named:

  1. Bot disclosure. Responsible: Provider.
  2. Emotion recognition and biometric categorisation disclosure. Responsible: Provider.
  3. Synthetic content (“Deep Fake”) disclosure. Responsible: User.

Limited risk: Problems 11

  • Provider vs. user boundary not that sharp; each alone may lack information
  • Emotion recognition is anything but limited-risk.
    • Scientific models of emotion are overly simplistic, and data reflect this

    • Emotions, and their expression, are inseparable from social, cultural, situational context

    • Emotions are intimately related to human dignity

    • Emotions are often subject to moral judgement

    • In consequence, social credit effects arise

General Problems: Society & Human Rights

  • Concrete lists of applications, combined with the massive lobbying that has been going on, mean that the rules reflect the interests of dominant parties, not those of the people impacted/affected, or of minorities.

  • Leaves out risks for groups of individuals or the society as a whole.

  • No reference to the individual affected / no regard to human rights.

  • Not enough focus on data protection.

References

Fairness, ethics, etc.

Selbst, A. et al.: Fairness and Abstraction in Sociotechnical Systems.

Stark, L., Hoey, J.: The Ethics of Emotion in Artificial Intelligence Systems.

EDRi: Beyond Debiasing.

Cobbe, J., Singh, J.: Artificial Intelligence as a Service: Legal Responsibilities, Liabilities, and Policy Challenges.


  1. The second of the two main treaties of the European Union, a continuation of the original Treaty of Rome (1958). The other is the Treaty on European Union (TEU), originally the Treaty of Maastricht (1993); both were amended by the Treaty of Lisbon (2009).↩︎

  2. as well as Article 7 of the Charter of Fundamental Rights↩︎

  3. EDPB-EDPS Statement 05/2021 on the Data Governance Act in light of the legislative developments↩︎

  4. https://edpb.europa.eu/system/files/2021-11/edpb_statement_on_the_digital_services_package_and_data_strategy_en.pdf↩︎

  5. https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/694212/EPRS_BRI(2021)694212_EN.pdf↩︎

  6. https://www.tagesspiegel.de/politik/eu-guidelines-ethics-washing-made-in-europe/24195496.html↩︎

  7. Essentially following Veale & Borgesius, Demystifying the Draft EU Artificial Intelligence Act.↩︎

  8. Essentially following Veale & Borgesius, Demystifying the Draft EU Artificial Intelligence Act.↩︎

  9. Essentially following Veale & Borgesius, Demystifying the Draft EU Artificial Intelligence Act.↩︎

  10. Essentially following Veale & Borgesius, Demystifying the Draft EU Artificial Intelligence Act.↩︎

  11. See Stark & Hoey, The Ethics of Emotion in Artificial Intelligence Systems.↩︎

  12. See: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3824736↩︎